Change potentially dangerous LLMs prompt injection to caution #70

Closed
misha-plus wants to merge 1 commit into meithecatte:master from misha-plus:master

Conversation

@misha-plus

Hello!
I found a potentially dangerous prompt for LLMs in AGENTS.md, so I've changed it to an LLM usage caution in line with the policy described in CONTRIBUTING.md.

@Lili1228

Lili1228 commented Mar 7, 2026

xD

@sdomi

sdomi commented Mar 7, 2026

( do they know )

@meithecatte
Owner

All reasonable LLM vendors properly sandbox the bullshit generators they ship to their users.[1] The current wording of AGENTS.md therefore doesn't pose any actual danger,[2] and merely serves to send a stronger signal than the wording you suggest.

In light of the above, I do not consider any action to be necessary here; those who have expressed their opinions above seem to agree.


[1] Proof: due to the fundamental nature of LLMs, it is impossible to give any sound guarantees about their output, and one must consider said output to be fully controlled by anyone who has any control over any part of the prompt, context, or training data. Therefore, any vendor who does not ship a proper sandbox is simply not reasonable by any possible meaning of the word.

[2] Should an unreasonable LLM vendor exist, the current contents of the AGENTS.md file still introduce no new danger.
